## Linear Mapping
**Definition:** Let $V$ and $W$ be linear spaces over the same field $F$. A mapping $\mathcal{T}: V \rightarrow W$ is called a linear mapping if it satisfies

$$\mathcal{T}(ax+by) = a\mathcal{T}(x) + b\mathcal{T}(y) \qquad \forall x,y \in V, \quad \forall a,b \in F$$

Here $V$ is called the domain of $\mathcal{T}$ and $W$ is called the codomain of $\mathcal{T}$.
**Example:** Let $V = W$ be the space of polynomials of degree less than $n$ in $s$, and let $\mathcal{T} = \frac{d}{ds}$.

**Solution:** Let $p,q \in V$ and $\alpha_1,\alpha_2 \in F$ with

$$p(s) = \sum_{i=0}^{n-1} a_i s^i \qquad \text{and} \qquad q(s) = \sum_{i=0}^{n-1} b_i s^i$$

Then

$$\alpha_1 p(s) + \alpha_2 q(s) = \sum_{i=0}^{n-1} (\alpha_1 a_i + \alpha_2 b_i) s^i$$

$$\frac{d}{ds}\left(\alpha_1 p(s) + \alpha_2 q(s)\right) = \sum_{i=0}^{n-1} (\alpha_1 a_i + \alpha_2 b_i)\, i s^{i-1} = \alpha_1\sum_{i=0}^{n-1} i a_i s^{i-1} + \alpha_2\sum_{i=0}^{n-1} i b_i s^{i-1} = \alpha_1\frac{dp}{ds} + \alpha_2\frac{dq}{ds}$$

Therefore

$$\mathcal{T}(\alpha_1 p+\alpha_2 q) = \frac{d}{ds}(\alpha_1 p+\alpha_2 q) = \alpha_1\frac{dp}{ds} + \alpha_2\frac{dq}{ds} = \alpha_1\mathcal{T}(p) + \alpha_2\mathcal{T}(q) \ \blacksquare$$
**Example:** Let $V = W = \mathbb{R}^2$ and let $\mathcal{A}$ be defined as

$$\mathcal{A}(x) = \begin{bmatrix} \alpha_1 \\ \alpha_1 + \alpha_2 \end{bmatrix} \quad \text{where } x = \begin{bmatrix} \alpha_1 \\ \alpha_2 \end{bmatrix}$$
**Solution:** Let $a,b \in F$ and $x_1,x_2 \in V$ with $x_1 = \begin{bmatrix} \alpha_1 \\ \alpha_2 \end{bmatrix}$ and $x_2 = \begin{bmatrix} \beta_1 \\ \beta_2 \end{bmatrix}$. Then

$$\mathcal{A}(ax_1 + bx_2) = \mathcal{A}\!\left(a\begin{bmatrix} \alpha_1 \\ \alpha_2 \end{bmatrix} + b\begin{bmatrix} \beta_1 \\ \beta_2 \end{bmatrix}\right) = \mathcal{A}\!\left(\begin{bmatrix} a\alpha_1 + b\beta_1 \\ a\alpha_2 + b\beta_2 \end{bmatrix}\right) = \begin{bmatrix} a\alpha_1 + b\beta_1 \\ a\alpha_1 + a\alpha_2 + b\beta_1 + b\beta_2 \end{bmatrix} = a\begin{bmatrix} \alpha_1 \\ \alpha_1 + \alpha_2 \end{bmatrix} + b\begin{bmatrix} \beta_1 \\ \beta_1 + \beta_2 \end{bmatrix} = a\mathcal{A}(x_1) + b\mathcal{A}(x_2) \ \blacksquare$$
**Example:** Let $V = W = \mathbb{R}$. Is $\mathcal{A}x = (1-x)$ linear or not?

**Solution:** Let $a,b \in F$ and $x_1,x_2 \in V$. Then

$$\begin{aligned}
\mathcal{A}(ax_1 + bx_2) &\stackrel{?}{=} a\mathcal{A}(x_1) + b\mathcal{A}(x_2) \\
1 - (ax_1 + bx_2) &\stackrel{?}{=} a(1-x_1) + b(1-x_2) \\
1 - ax_1 - bx_2 &\stackrel{?}{=} a - ax_1 + b - bx_2 \\
1 &\stackrel{?}{=} a + b
\end{aligned}$$

The last line fails whenever $a + b \neq 1$ (for example $a = b = 1$), so $\mathcal{A}$ is not linear. $\blacksquare$
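A single numerical counterexample is enough to reject linearity, as this sketch shows (plain Python, choosing $a = b = 1$, $x_1 = x_2 = 0$ so that $a + b \neq 1$):

```python
# A(x) = 1 - x is affine, not linear: exhibit one failing instance.
def A(x):
    return 1 - x

a, b, x1, x2 = 1.0, 1.0, 0.0, 0.0   # a + b = 2 != 1
lhs = A(a * x1 + b * x2)             # A(0) = 1
rhs = a * A(x1) + b * A(x2)          # 1 + 1 = 2
print(lhs, rhs)  # → 1.0 2.0
```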
- Rotation transformations in $\mathbb{R}^2$ are linear transformations.
- Integration and differentiation are linear transformations.
**Definition:** Given a linear mapping $\mathcal{T}: V \rightarrow W$, the set of all vectors $x \in V$ such that $\mathcal{T}(x) = 0_W$ is called the null space of $\mathcal{T}$, denoted by $N(\mathcal{T})$. That is,

$$N(\mathcal{T}) := \{x \in V \ : \ \mathcal{T}(x) = 0_W\}$$
**Definition:** Given a linear mapping $\mathcal{T}: V \rightarrow W$, the set of all vectors $w \in W$ such that $w = \mathcal{T}(v)$ for some $v \in V$ is called the range of $\mathcal{T}$, denoted by $R(\mathcal{T})$. That is,

$$R(\mathcal{T}) := \{w \in W \ : \ w = \mathcal{T}(v) \ \text{for some} \ v \in V\}$$
**Claim:** For a given linear mapping $\mathcal{T}: V \rightarrow W$, $N(\mathcal{T})$ is a linear subspace of $V$.

**Proof:** Let $x_1,x_2 \in N(\mathcal{T})$ and $a \in F$. We must show:

(S1) $x_1 + x_2 \in N(\mathcal{T})$

(S2) $ax_1 \in N(\mathcal{T})$

1. $\mathcal{T}(x_1 + x_2) = \mathcal{T}(x_1) + \mathcal{T}(x_2) = 0_W + 0_W = 0_W \implies x_1 + x_2 \in N(\mathcal{T})$
2. $\mathcal{T}(ax_1) = a\mathcal{T}(x_1) = a\,0_W = 0_W \implies ax_1 \in N(\mathcal{T}) \ \blacksquare$
**Claim:** For a given linear mapping $\mathcal{T}: V \rightarrow W$, $R(\mathcal{T})$ is a subspace of $W$.

**Proof:** Let $w_1,w_2 \in R(\mathcal{T})$ and $a \in F$. We must show:

(S1) $w_1 + w_2 \in R(\mathcal{T})$

(S2) $aw_1 \in R(\mathcal{T})$

By definition there exist $v_1,v_2 \in V$ such that $\mathcal{T}(v_1) = w_1$ and $\mathcal{T}(v_2) = w_2$.

1. $\mathcal{T}(v_1 + v_2) = \mathcal{T}(v_1) + \mathcal{T}(v_2) = w_1 + w_2 \implies w_1 + w_2 \in R(\mathcal{T})$
2. $\mathcal{T}(av_1) = a\mathcal{T}(v_1) = aw_1 \implies aw_1 \in R(\mathcal{T}) \ \blacksquare$
**Definition:** A linear transformation $\mathcal{T}: V \rightarrow W$ is called one-to-one if $x_1 \neq x_2$ implies $\mathcal{T}(x_1) \neq \mathcal{T}(x_2)$ for all $x_1,x_2 \in V$.
**Theorem:** Let $\mathcal{T}: V \rightarrow W$ be a linear transformation. Then $\mathcal{T}$ is one-to-one if and only if $N(\mathcal{T}) = \{0_V\}$.
**Proof:** Since this is an if-and-only-if statement, we prove both directions.
(Backward direction) Assume that $N(\mathcal{T}) = \{0_V\}$ and $\mathcal{T}(x_1) = \mathcal{T}(x_2)$ for some $x_1,x_2 \in V$. Then

$$\begin{aligned}
\mathcal{T}(x_1) - \mathcal{T}(x_2) &= 0_W \\
\mathcal{T}(x_1 - x_2) &= 0_W \\
x_1 - x_2 &\in N(\mathcal{T}) \\
x_1 - x_2 &= 0_V \\
x_1 &= x_2
\end{aligned}$$

Hence $\mathcal{T}$ is one-to-one.
(Forward direction) Assume that $\mathcal{T}$ is one-to-one and let $x \in N(\mathcal{T})$, so that $\mathcal{T}(x) = 0_W$. By linearity $\mathcal{T}(0_V) = 0_W$ as well, and since $\mathcal{T}$ is one-to-one this forces $x = 0_V$. Hence $N(\mathcal{T}) = \{0_V\}$. $\blacksquare$
**Definition:** A linear transformation $\mathcal{T}: V \rightarrow W$ is called onto if $R(\mathcal{T}) = W$; otherwise, if $R(\mathcal{T}) \subsetneq W$, then $\mathcal{T}$ is called into.
**Example:** Let $V := \{f:[0, 1] \rightarrow \mathbb{R} \ : \ f \text{ is integrable}\}$. A transformation $\mathcal{A}: V \rightarrow \mathbb{R}$ is defined as

$$\mathcal{A}(f) = \int_{0}^{1} f(s)\,ds$$

Is $\mathcal{A}$ one-to-one?

**Solution:** It seems unlikely that integration is one-to-one, since many different functions integrate to the same value. We can exploit the fact that a nonzero function may integrate to zero.
Let $f(s) = 2s-1$. Then

$$\mathcal{A}(f) = \int_{0}^{1} (2s-1)\,ds = \left[s^2-s\right]_{0}^{1} = 0$$

So $\mathcal{A}(0) = 0$ and $\mathcal{A}(f) = 0$ for some $f \neq 0$; hence $\mathcal{A}$ is not one-to-one.
Moreover, let $f(s) = a$ for an arbitrary $a \in \mathbb{R}$. Then

$$\mathcal{A}(f) = \int_{0}^{1} a\,ds = \left[as\right]_{0}^{1} = a$$

which shows that $\mathcal{A}$ is onto.
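Both integrals can be evaluated exactly for polynomials, since $\int_0^1 \sum_i a_i s^i \, ds = \sum_i a_i/(i+1)$. A minimal sketch in plain Python (the helper `integrate01` is mine):

```python
# Exact integral over [0, 1] of a polynomial with coefficients [a_0, a_1, ...].
def integrate01(p):
    return sum(a / (i + 1) for i, a in enumerate(p))

f = [-1, 2]                 # f(s) = 2s - 1, a nonzero function
print(integrate01(f))       # → 0.0   (so A is not one-to-one)
print(integrate01([3.7]))   # → 3.7   (a constant a hits any real value, so A is onto)
```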
## Matrix Representations
**Definition:** Let $\mathcal{T}: V \rightarrow W$ be a linear transformation with $\dim(V) = n$ and $\dim(W) = m$. Let $\mathcal{B} = \{v_1,v_2,\dots,v_n\}$ be a basis for $V$ and $\mathcal{C} = \{w_1,w_2,\dots,w_m\}$ be a basis for $W$, and denote by $[v]_{\mathcal{B}} \in F^n$ and $[w]_{\mathcal{C}} \in F^m$ the coordinate vectors of $v \in V$ and $w \in W$ with respect to these bases. Then the matrix representation of $\mathcal{T}$ with respect to $\mathcal{B}$ and $\mathcal{C}$ is the $m \times n$ matrix

$$[\mathcal{T}]_{\mathcal{B}}^{\mathcal{C}} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}$$

whose entries are determined by

$$\mathcal{T}(v_j) = \sum_{i=1}^{m} a_{ij}w_i \qquad \text{for} \ j = 1,2,\dots,n$$
**Remark:** The matrix representation of $\mathcal{T}$ with respect to $\mathcal{B}$ and $\mathcal{C}$ is denoted by $[\mathcal{T}]_{\mathcal{B}}^{\mathcal{C}}$. In coordinates, the transformation now reads

$$[w]_{\mathcal{C}} = [\mathcal{T}]_{\mathcal{B}}^{\mathcal{C}} [v]_{\mathcal{B}}$$
To construct the matrix representation:

1. Take each basis vector $v_j$ in $\mathcal{B}$.
2. Apply $\mathcal{A}$ to $v_j$: $\mathcal{A}(v_j)$.
3. Express the result in terms of the basis vectors in $\mathcal{C}$: $\mathcal{A}(v_j) = \sum_{i=1}^{m} a_{ij}w_i$.
4. The $j$th column of $[\mathcal{A}]_{\mathcal{B}}^{\mathcal{C}}$ is the vector $\begin{bmatrix} a_{1j} \\ a_{2j} \\ \vdots \\ a_{mj} \end{bmatrix}$.
**Example:** Let $V = \{\text{polynomials of degree at most } 3\}$ and $W = \{\text{polynomials of degree at most } 2\}$, and let $\mathcal{A}: V \rightarrow W$ be defined as

$$\mathcal{A}(p(s)) = \frac{dp(s)}{ds}$$

Find the matrix representation of $\mathcal{A}$ with respect to the bases $\mathcal{B} = \{1, 1+s, 1+s+s^2, 1+s+s^2+s^3\}$ and $\mathcal{C} = \{1, 1+s, 1+s+s^2\}$.
**Solution:** Write $\mathcal{B} = \{v_1, v_2, v_3, v_4\}$ and $\mathcal{C} = \{w_1, w_2, w_3\}$. Then

$$\begin{aligned}
\mathcal{A}(v_1) &= \frac{d}{ds}(1) = 0 = 0w_1 + 0w_2 + 0w_3 \\
\mathcal{A}(v_2) &= \frac{d}{ds}(1+s) = 1 = 1w_1 + 0w_2 + 0w_3 \\
\mathcal{A}(v_3) &= \frac{d}{ds}(1+s+s^2) = 1+2s = -1w_1 + 2w_2 + 0w_3 \\
\mathcal{A}(v_4) &= \frac{d}{ds}(1+s+s^2+s^3) = 1+2s+3s^2 = -1w_1 - 1w_2 + 3w_3
\end{aligned}$$

$$[\mathcal{A}]_{\mathcal{B}}^{\mathcal{C}} = \begin{bmatrix} 0 & 1 & -1 & -1 \\ 0 & 0 & 2 & -1 \\ 0 & 0 & 0 & 3 \end{bmatrix}$$
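The hand computation can be cross-checked numerically: express both bases in canonical coordinates $(1, s, s^2, s^3)$ and solve a linear system per column (a sketch assuming NumPy; the matrix names are mine):

```python
import numpy as np

# Columns of B_mat: the B-basis polynomials {1, 1+s, 1+s+s^2, 1+s+s^2+s^3}
# in canonical coordinates (1, s, s^2, s^3); likewise C_mat for C in (1, s, s^2).
B_mat = np.array([[1, 1, 1, 1],
                  [0, 1, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 0, 1]], dtype=float)
C_mat = np.array([[1, 1, 1],
                  [0, 1, 1],
                  [0, 0, 1]], dtype=float)

# Canonical derivative matrix: (a0, a1, a2, a3) -> (a1, 2*a2, 3*a3).
D = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3]], dtype=float)

# Column j of the representation solves C_mat @ col = D @ (j-th B vector).
A_rep = np.linalg.solve(C_mat, D @ B_mat)
print(A_rep)
```

The result agrees with the matrix derived by hand above.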
For comparison, with respect to the canonical bases $\bar{\mathcal{B}} = \{1, s, s^2, s^3\}$ and $\bar{\mathcal{C}} = \{1, s, s^2\}$ the same transformation is represented by the familiar differentiation matrix

$$[\mathcal{A}]_{\bar{\mathcal{B}}}^{\bar{\mathcal{C}}} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{bmatrix}$$

The change of basis machinery below recovers exactly this matrix.
Example : Let V = R 2 V = \mathbb{R}^2 V = R 2 and A : V → V \mathcal{A}: V \rightarrow V A : V → V be defined as,
A ( x ) = [ 0 1 − 1 0 ] x + x [ 0 − 1 1 0 ] \mathcal{A}(x) = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}x + x \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} A ( x ) = [ 0 − 1 ​ 1 0 ​ ] x + x [ 0 1 ​ − 1 0 ​ ]
Find the matrix representation of A \mathcal{A} A with respect to the bases
B = { [ 1 0 0 0 ] , [ 0 1 0 0 ] , [ 0 0 1 0 ] , [ 0 0 0 1 ] } \mathcal{B} = \{\begin {bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \begin {bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \begin {bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}, \begin {bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}\} B = { [ 1 0 ​ 0 0 ​ ] , [ 0 0 ​ 1 0 ​ ] , [ 0 1 ​ 0 0 ​ ] , [ 0 0 ​ 0 1 ​ ] } . and
C = { [ 1 0 0 0 ] , [ 1 1 0 0 ] , [ 1 1 1 0 ] , [ 1 1 1 1 ] } \mathcal{C} = \{\begin {bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \begin {bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}, \begin {bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}, \begin {bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}\} C = { [ 1 0 ​ 0 0 ​ ] , [ 1 0 ​ 1 0 ​ ] , [ 1 1 ​ 1 0 ​ ] , [ 1 1 ​ 1 1 ​ ] } .
**Solution:** Write $\mathcal{B} = \{v_1, v_2, v_3, v_4\}$ and $\mathcal{C} = \{w_1, w_2, w_3, w_4\}$. Then

$$\begin{aligned}
\mathcal{A}(v_1) &= \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ -1 & 0 \end{bmatrix} + \begin{bmatrix} 0 & -1 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & -1 \\ -1 & 0 \end{bmatrix} = 1w_1 + 0w_2 - 1w_3 + 0w_4 \\
\mathcal{A}(v_2) &= \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & -1 \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} = 1w_1 + 0w_2 + 1w_3 - 1w_4 \\
\mathcal{A}(v_3) &= \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & -1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} = 1w_1 + 0w_2 + 1w_3 - 1w_4 \\
\mathcal{A}(v_4) &= \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = -1w_1 + 0w_2 + 1w_3 + 0w_4
\end{aligned}$$

$$[\mathcal{A}]_{\mathcal{B}}^{\mathcal{C}} = \begin{bmatrix} 1 & 1 & 1 & -1 \\ 0 & 0 & 0 & 0 \\ -1 & 1 & 1 & 1 \\ 0 & -1 & -1 & 0 \end{bmatrix}$$
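This example can be verified by flattening each $2 \times 2$ matrix into a vector of length 4 (row-major) and solving for the $\mathcal{C}$-coordinates column by column (a sketch assuming NumPy; variable names are mine):

```python
import numpy as np

M = np.array([[0.0, 1.0], [-1.0, 0.0]])
N = np.array([[0.0, -1.0], [1.0, 0.0]])
A = lambda X: M @ X + X @ N   # the transformation A(x) = Mx + xN

# B: the canonical 2x2 matrix basis E11, E12, E21, E22.
B = [np.array(E, dtype=float) for E in
     ([[1, 0], [0, 0]], [[0, 1], [0, 0]], [[0, 0], [1, 0]], [[0, 0], [0, 1]])]

# Columns of C_mat: the C-basis matrices flattened row-major (m11, m12, m21, m22).
C_mat = np.array([[1, 1, 1, 1],
                  [0, 1, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 0, 1]], dtype=float)

# Column j: C-coordinates of A(v_j), found by solving C_mat @ col = flat(A(v_j)).
cols = [np.linalg.solve(C_mat, A(v).reshape(4)) for v in B]
rep = np.column_stack(cols)
print(rep)
```

The printed matrix matches the hand-computed representation.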
## Change of Basis
**Definition:** Let $\mathcal{B}$ and $\bar{\mathcal{B}}$ be two bases for an $n$-dimensional linear space $V$. The change of basis matrix from $\bar{\mathcal{B}}$ to $\mathcal{B}$ is the $n \times n$ matrix $P$ such that

$$[v]_{\mathcal{B}} = P [v]_{\bar{\mathcal{B}}}$$

Suppose a linear transformation has matrix representation $A$ with respect to $\mathcal{B}$ and $\mathcal{C}$, and we seek its representation $\bar{A}$ with respect to $\bar{\mathcal{B}}$ and $\mathcal{C}$:

$$[w]_{\mathcal{C}} = A [v]_{\mathcal{B}} \qquad \text{and} \qquad [w]_{\mathcal{C}} = \bar{A} [v]_{\bar{\mathcal{B}}}$$

Since a change of basis is itself a linear transformation, substituting $[v]_{\mathcal{B}} = P [v]_{\bar{\mathcal{B}}}$ gives

$$[w]_{\mathcal{C}} = A P [v]_{\bar{\mathcal{B}}} \implies \bar{A} = AP$$
From the codomain perspective, let $Q$ be the change of basis matrix satisfying $[w]_{\mathcal{C}} = Q [w]_{\bar{\mathcal{C}}}$. Then

$$[w]_{\bar{\mathcal{C}}} = Q^{-1}A [v]_{\mathcal{B}}$$

$$[w]_{\bar{\mathcal{C}}} = Q^{-1}A P [v]_{\bar{\mathcal{B}}}$$
**Example:**

$$\begin{aligned}
V &= \{ \text{polynomials of degree at most } 3 \} \\
W &= \{ \text{polynomials of degree at most } 2 \} \\
\mathcal{B} &= \{1, 1+s, 1+s+s^2, 1+s+s^2+s^3\} \\
\mathcal{C} &= \{1, 1+s, 1+s+s^2\} \\
A &= \begin{bmatrix} 0 & 1 & -1 & -1 \\ 0 & 0 & 2 & -1 \\ 0 & 0 & 0 & 3 \end{bmatrix} \\
\bar{\mathcal{B}} &= \{1,s,s^2,s^3\}
\end{aligned}$$
**Solution:** First we find the change of basis matrix $P$ in the domain, from $\bar{\mathcal{B}}$ to $\mathcal{B}$. More precisely, we want $\bar{A}$ such that

$$[w]_{\mathcal{C}} = \bar{A} [v]_{\bar{\mathcal{B}}}$$

and given $[v]_{\mathcal{B}} = P [v]_{\bar{\mathcal{B}}}$, $\bar{A}$ is obtained from

$$[w]_{\mathcal{C}} = A P [v]_{\bar{\mathcal{B}}} \implies \bar{A} = AP$$
In order to find $P$ we need to write the basis vectors in $\bar{\mathcal{B}}$ in terms of $\mathcal{B}$:

$$\begin{aligned}
1 &= 1(1) + 0(1+s) + 0(1+s+s^2) + 0(1+s+s^2+s^3) \\
s &= -1(1) + 1(1+s) + 0(1+s+s^2) + 0(1+s+s^2+s^3) \\
s^2 &= 0(1) - 1(1+s) + 1(1+s+s^2) + 0(1+s+s^2+s^3) \\
s^3 &= 0(1) + 0(1+s) - 1(1+s+s^2) + 1(1+s+s^2+s^3)
\end{aligned}$$

$$P = \begin{bmatrix} 1 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
Now $\bar{A}$ is equal to

$$\bar{A} = A P = \begin{bmatrix} 0 & 1 & -1 & -1 \\ 0 & 0 & 2 & -1 \\ 0 & 0 & 0 & 3 \end{bmatrix} \begin{bmatrix} 1 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 1 & -2 & 0 \\ 0 & 0 & 2 & -3 \\ 0 & 0 & 0 & 3 \end{bmatrix}$$
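The product $\bar{A} = AP$ is quick to verify numerically (a sketch assuming NumPy). As a sanity check on the meaning of $\bar{A}$, its third column $(-2, 2, 0)$ should be the $\mathcal{C}$-coordinates of $\frac{d}{ds}s^2 = 2s$, and indeed $-2(1) + 2(1+s) + 0(1+s+s^2) = 2s$:

```python
import numpy as np

A = np.array([[0, 1, -1, -1],
              [0, 0, 2, -1],
              [0, 0, 0, 3]], dtype=float)
P = np.array([[1, -1, 0, 0],
              [0, 1, -1, 0],
              [0, 0, 1, -1],
              [0, 0, 0, 1]], dtype=float)

A_bar = A @ P   # representation w.r.t. B_bar = {1, s, s^2, s^3} and C
print(A_bar)
```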
Next we change the basis in the codomain to the canonical basis $\bar{\mathcal{C}} = \{1, s, s^2\}$ as well, while keeping the canonical basis $\bar{\mathcal{B}}$ in the domain. That is,

$$[w]_{\bar{\mathcal{C}}} = Q^{-1} A P [v]_{\bar{\mathcal{B}}}$$

where $Q$ satisfies $[w]_{\mathcal{C}} = Q [w]_{\bar{\mathcal{C}}}$, i.e. $[w]_{\bar{\mathcal{C}}} = Q^{-1} [w]_{\mathcal{C}}$.

In order to find $Q^{-1}$ in a single step, we can write the basis vectors in $\mathcal{C}$ in terms of $\bar{\mathcal{C}}$.
$$\begin{aligned}
1 &= 1(1) + 0(s) + 0(s^2) \\
1+s &= 1(1) + 1(s) + 0(s^2) \\
1+s+s^2 &= 1(1) + 1(s) + 1(s^2)
\end{aligned}$$

$$Q^{-1} = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix}$$
As the final steps,

$$[w]_{\bar{\mathcal{C}}} = Q^{-1} \bar{A} [v]_{\bar{\mathcal{B}}}$$

$$Q^{-1} \bar{A} = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 0 & 1 & -2 & 0 \\ 0 & 0 & 2 & -3 \\ 0 & 0 & 0 & 3 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{bmatrix}$$

$$[w]_{\bar{\mathcal{C}}} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{bmatrix} [v]_{\bar{\mathcal{B}}}$$
Given the matrix representation of a linear transformation $\mathcal{A}:V \rightarrow W$ with respect to bases $\mathcal{B}$ and $\mathcal{C}$, one can draw the following diagram.

```mermaid
graph LR
    A[<b>Matrix Representation</b>] --> B[<b>Change of Basis</b>]
    B --> C[<b>Matrix Representation</b>]
    C --> D[<b>Change of Basis</b>]
    D --> E[<b>Matrix Representation</b>]
```
#EE501 - Linear Systems Theory at METU